Deep Learning (DL) aims at learning \emph{meaningful representations}. A meaningful representation refers to one that yields a significant performance improvement on the associated Machine Learning (ML) task when it replaces the raw data as the input. However, optimal architecture design and model parameter estimation in DL algorithms are widely considered to be intractable. Evolutionary algorithms are well suited to such complex and non-convex problems due to their inherently gradient-free nature and insensitivity to local optima. In this paper, we propose a computationally economical algorithm for evolving \emph{unsupervised deep neural networks} to efficiently learn \emph{meaningful representations}, which is particularly valuable in the current Big Data era, where sufficient labeled data for training is often expensive to acquire. In the proposed algorithm, finding an appropriate architecture and the initial parameter values for the ML task at hand is modeled by a computationally efficient gene encoding approach, which is employed to effectively model tasks with a large number of parameters. In addition, a local search strategy is incorporated to facilitate exploitation and further improve performance. Furthermore, a small proportion of labeled data is utilized during the evolutionary search to guarantee that the learnt representations are meaningful. The performance of the proposed algorithm has been thoroughly investigated on classification tasks. Specifically, the proposed algorithm consistently achieves a classification error rate of $1.15\%$ on MNIST, which is a very promising result compared with state-of-the-art unsupervised DL algorithms.